Results 1 - 20 of 4,334
1.
Sensors (Basel) ; 24(7), 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators support the research hypothesis: the correlation between jurors' emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. Regression analysis further shows that the correlation between jurors' valence and their choices in the jury test increases when only data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
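The abstract's key regression finding — that the valence–choice correlation rises once inattentive jurors are filtered out — can be sketched with a toy Pearson correlation. All data and variable names below are invented for illustration, not taken from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Toy jury data: FER valence per response, the juror's binary choice,
# and an attention flag (values invented for illustration).
valence   = np.array([0.9, 0.7, 0.8, 0.2, 0.1, -0.5, 0.6, -0.9])
choice    = np.array([1,   1,   1,   0,   0,   1,    0,   1])
attentive = np.array([True, True, True, True, True, False, False, False])

r_all = pearson_r(valence, choice)                        # all responses
r_att = pearson_r(valence[attentive], choice[attentive])  # attentive only
```

With the inattentive responses removed, the toy correlation jumps from near zero to close to 1, mirroring the direction of the reported effect.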


Subject(s)
Facial Recognition , Humans , Reproducibility of Results , Acoustics , Sound , Emotions
2.
Article in English | MEDLINE | ID: mdl-38607744

ABSTRACT

The purpose of this work is to analyze how new technologies can enhance clinical practice, and to examine the physical traits of emotional expressiveness in facial expressions across a number of psychiatric illnesses. To this end, an automatic facial expression recognition system is proposed that analyzes static, sequential, or video facial images from medical healthcare data to detect emotions in people's facial regions. The proposed method is implemented in five steps. The first step is image preprocessing, in which a facial region of interest is segmented from the input image. The second component includes a classical deep feature representation and a quantum part that involves successive sets of quantum convolutional layers followed by random quantum variational circuits for feature learning. The proposed quantum convolutional neural network attains a faster training approach, taking [Formula: see text] time, whereas classical convolutional neural network models take [Formula: see text] time. Additionally, performance improvement techniques such as image augmentation, fine-tuning, matrix normalization, and transfer learning are applied to the recognition system. Finally, the scores from the classical and quantum deep learning models are fused to improve the performance of the proposed method. Extensive experiments on the Karolinska-directed emotional faces (KDEF), Static Facial Expressions in the Wild (SFEW 2.0), and Facial Expression Recognition 2013 (FER-2013) benchmark databases, together with comparisons against other state-of-the-art methods, demonstrate the improvement achieved by the proposed system.
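The final score-fusion step — combining the class scores of the classical and quantum branches — is commonly a weighted sum of softmax outputs. A minimal sketch; the fusion weight, class set, and logits below are assumptions, not values from the paper:

```python
import numpy as np

def softmax(z):
    """Convert raw scores to a probability distribution."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Illustrative per-class logits from each branch (e.g. happy, sad, neutral).
classical_logits = np.array([2.0, 0.5, -1.0])
quantum_logits   = np.array([1.2, 1.1, -0.5])

p_classical = softmax(classical_logits)
p_quantum   = softmax(quantum_logits)

w = 0.6                                       # assumed fusion weight
p_fused = w * p_classical + (1 - w) * p_quantum
predicted = int(np.argmax(p_fused))           # fused class decision
```

Because both branch outputs are probability distributions, any convex combination of them is one as well, so the fused scores remain directly interpretable.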


Subject(s)
Facial Recognition , Mental Health , Humans , Benchmarking , Databases, Factual , Neural Networks, Computer
3.
Sci Rep ; 14(1): 8121, 2024 04 07.
Article in English | MEDLINE | ID: mdl-38582772

ABSTRACT

This paper proposes an improved strategy for the MobileNetV2 neural network (I-MobileNetV2) in response to the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It uses a reverse fusion mechanism to retain negative features, making information less likely to be lost. The SELU activation function replaces ReLU6 to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, the channel attention mechanism Squeeze-and-Excitation Networks (SE-Net) is integrated into the MobileNetV2 network. Experiments on the facial expression datasets FER2013 and CK+ show that the proposed model achieves facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreases by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
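The channel-attention step (SE-Net) integrated into I-MobileNetV2 can be sketched in a few lines: squeeze each channel to a scalar by global average pooling, excite through a small bottleneck MLP, and rescale the channels. The weights here are random placeholders, not trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: pool each channel to one value, pass the vector
    through a bottleneck MLP, and scale each channel by the resulting weight."""
    squeeze = feature_map.mean(axis=(1, 2))             # (C,) channel descriptor
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # (C,) weights in (0, 1)
    return feature_map * excite[:, None, None]          # channel-wise rescaling

rng = np.random.default_rng(42)
c, r = 8, 2                                   # channels, reduction ratio (assumed)
x  = rng.standard_normal((c, 6, 6))           # one (C, H, W) feature map
w1 = rng.standard_normal((c // r, c)) * 0.1   # bottleneck down-projection
w2 = rng.standard_normal((c, c // r)) * 0.1   # bottleneck up-projection
y  = se_block(x, w1, w2)
```

Each output channel is the input channel multiplied by a single learned scalar, which is what lets the network emphasize informative channels cheaply.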


Subject(s)
Accidental Injuries , Facial Recognition , Humans , Neural Networks, Computer , Recognition, Psychology
4.
IEEE Trans Image Process ; 33: 2514-2529, 2024.
Article in English | MEDLINE | ID: mdl-38530732

ABSTRACT

Convolutional neural networks (CNNs) have achieved significant improvement on the task of facial expression recognition. However, current training still suffers from inconsistent learning intensities among different layers, i.e., feature representations in the shallow layers are not sufficiently learned compared with those in deep layers. To this end, this work proposes a contrastive learning framework to align the feature semantics of shallow and deep layers, followed by an attention module that represents the multi-scale features in a weight-adaptive manner. The proposed algorithm has three main merits. First, the learning intensity, defined as the magnitude of the backpropagation gradient, of the shallow-layer features is enhanced by cross-layer contrastive learning. Second, the latent semantics in the shallow-layer and deep-layer features are explored and aligned during contrastive learning, so the fine-grained characteristics of expressions can be taken into account in feature representation learning. Third, by integrating the multi-scale features from multiple layers with an attention module, the algorithm achieves state-of-the-art performance (92.21%, 89.50%, and 62.82%) on three in-the-wild expression databases (RAF-DB, FERPlus, and SFEW), and the second-best performance (65.29%) on the AffectNet dataset. Our codes will be made publicly available.
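Cross-layer contrastive alignment of this kind is typically built on an InfoNCE-style objective: each image's shallow feature should match the deep feature of the same image against other images in the batch. A minimal numpy sketch of that general objective (not the paper's exact loss):

```python
import numpy as np

def info_nce(shallow, deep, tau=0.1):
    """InfoNCE over a batch: the diagonal of the similarity matrix holds the
    positive (same-image) pairs; all other entries act as negatives."""
    s = shallow / np.linalg.norm(shallow, axis=1, keepdims=True)
    d = deep / np.linalg.norm(deep, axis=1, keepdims=True)
    logits = s @ d.T / tau                          # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))       # -log p(positive)

deep = np.eye(4, 16)                                # four orthogonal "deep" features
aligned  = info_nce(deep, deep)                     # shallow matches deep
shuffled = info_nce(np.roll(deep, 1, axis=0), deep) # pairs deliberately misaligned
```

When shallow features already match their deep counterparts, the loss is near zero; shuffling the pairing makes it large, which is the gradient signal that pulls shallow-layer semantics toward the deep layer's.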


Subject(s)
Facial Recognition , Semantics , Learning , Algorithms , Databases, Factual
5.
J Anim Sci ; 102, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38477672

ABSTRACT

The accurate identification of individual sheep is a crucial prerequisite for establishing digital sheep farms and precision livestock farming. Currently, deep learning technology provides an efficient and non-contact method for sheep identity recognition. In particular, convolutional neural networks can be used to learn features of sheep faces to determine their corresponding identities. However, existing sheep face recognition models face problems such as large model size and high computational cost, making it difficult to meet the requirements of practical applications. In response to these issues, we introduce a lightweight sheep face recognition model called YOLOv7-Sheep Face Recognition (YOLOv7-SFR). Considering the labor-intensive nature of manually capturing sheep face images, we developed a face image recording channel to streamline the process and improve efficiency. This study collected facial images of 50 Small-tailed Han sheep through the recording channel. The experimental sheep ranged in age from 1 to 3 yr, with an average weight of 63.1 kg. Data augmentation methods further expanded the original images, resulting in a total of 22,000 sheep face images, from which a sheep face dataset was established. To make the model lighter and improve its recognition performance, a variety of strategies were adopted. Specifically, we introduced the shuffle attention module into the backbone and fused the Dyhead module with the model's detection head. By combining multiple attention mechanisms, we improved the model's ability to learn target features. Additionally, the traditional convolutions in the backbone and neck were replaced with depthwise separable convolutions. Finally, we further enhanced performance through knowledge distillation, employing You Only Look Once version 7 (YOLOv7) as the teacher model and YOLOv7-SFR as the student model.
The training results indicate that our proposed approach achieved the best performance on the sheep face dataset, with a mean average precision@0.5 of 96.9%. The model size and average recognition time were 11.3 MB and 3.6 ms, respectively. Compared to YOLOv7-tiny, YOLOv7-SFR showed a 2.1% improvement in mean average precision@0.5, along with a 5.8% reduction in model size and a 42.9% reduction in average recognition time. The research results are expected to drive the practical applications of sheep face recognition technology.
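The knowledge-distillation step — YOLOv7 as teacher, YOLOv7-SFR as student — typically minimizes the KL divergence between temperature-softened class distributions. A minimal sketch of that standard objective; the temperature and logits are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-T softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

teacher      = np.array([4.0, 1.0, -2.0])   # teacher class scores (illustrative)
good_student = np.array([3.5, 0.8, -1.5])   # roughly mimics the teacher
bad_student  = np.array([-2.0, 1.0, 4.0])   # contradicts the teacher
```

The loss is zero when the student reproduces the teacher exactly and grows as their softened predictions diverge, so gradient descent pushes the lightweight student toward the teacher's behavior.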


Accurate identification of individual sheep is a crucial prerequisite for establishing digital sheep farms and precision livestock farming. In this study, we developed a lightweight sheep face recognition model, YOLOv7-SFR. Utilizing a face image recording channel, we efficiently collected facial images from 50 experimental sheep, resulting in a comprehensive sheep face dataset. Training results demonstrated that YOLOv7-SFR surpassed state-of-the-art lightweight sheep face recognition models, achieving a mean average precision@0.5 of 96.9%. Notably, the model size and average recognition time of YOLOv7-SFR were merely 11.3 MB and 3.6 ms, respectively. In summary, YOLOv7-SFR strikes an optimal balance between performance, model size, and recognition speed, offering promising practical applications for sheep face recognition technology. This study employs deep learning for sheep face recognition tasks, ensuring the welfare of sheep in the realm of digital agriculture and automation practices.


Subject(s)
Facial Recognition , Labor, Obstetric , Animals , Sheep , Pregnancy , Female , Agriculture , Farms , Livestock
6.
Math Biosci Eng ; 21(3): 4165-4186, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38549323

ABSTRACT

In recent years, the extensive use of facial recognition technology has raised concerns about data privacy and security in applications such as improving security, streamlining attendance systems, and smartphone access. In this study, a blockchain-based decentralized facial recognition system (DFRS) is designed to overcome these challenges. The DFRS takes a trailblazing approach, focusing on striking a critical balance between the benefits of facial recognition and the protection of individuals' privacy rights in an era of increasing monitoring. First, the facial traits are segmented into separate clusters, each maintained by a specialized node that preserves data privacy and security. After that, the data are obfuscated using generative adversarial networks. To ensure the security and authenticity of the data, the facial data are encoded and stored in the blockchain. The proposed system achieves significant results on the CelebA dataset, demonstrating the effectiveness of the approach: the model outperforms existing methods, attaining 99.80% accuracy on the dataset. These results emphasize the system's efficacy, especially in biometrics and privacy-focused applications, demonstrating outstanding precision and efficiency. This research provides a complete and novel solution for secure facial recognition and data security for privacy protection.
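The tamper-evidence that blockchain storage adds can be sketched as a minimal hash chain: each record commits to the previous record's hash, so any later modification of stored facial data invalidates the chain. This illustrates the general idea only, not the actual DFRS protocol; the payload values are placeholders:

```python
import hashlib
import json

def _digest(block):
    """Deterministic SHA-256 over the block's content (excluding its own hash)."""
    body = {k: block[k] for k in ("prev", "payload")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, payload):
    """Append-only record that commits to the previous block's hash."""
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = _digest(block)
    return block

def chain_valid(chain):
    """A chain is valid iff every hash matches its content and its predecessor."""
    return all(
        b["hash"] == _digest(b) and (i == 0 or b["prev"] == chain[i - 1]["hash"])
        for i, b in enumerate(chain)
    )

# Store obfuscated face embeddings per user (values are placeholders).
genesis = make_block("0" * 64, {"user": "u1", "embedding": [0.12, -0.70]})
chain = [genesis, make_block(genesis["hash"], {"user": "u2", "embedding": [0.40, 0.10]})]
```

Altering any stored embedding changes that block's digest, which no longer matches the recorded hash, so `chain_valid` immediately flags the tampering.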


Subject(s)
Blockchain , Deep Learning , Facial Recognition , Humans , Privacy , Phenotype
7.
Neuroimage ; 291: 120591, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38552812

ABSTRACT

Functional imaging has helped to understand the role of the human insula as a major processing network for integrating input with the current state of the body. However, these studies remain at a correlative level. Studies that have examined insula damage show lesion-specific performance deficits. Case reports have provided anecdotal evidence for deficits following insula damage, but group lesion studies offer a number of advances in providing evidence for functional representation of the insula. We conducted a systematic literature search to review group studies of patients with insula damage after stroke and identified 23 studies that tested emotional processing performance in these patients. Eight of these studies assessed emotional processing of visual (most commonly IAPS), auditory (e.g., prosody), somatosensory (emotional touch) and autonomic function (heart rate variability). Fifteen other studies looked at social processing, including emotional face recognition, gaming tasks and tests of empathy. Overall, there was a bias towards testing only patients with right-hemispheric lesions, making it difficult to consider hemisphere specificity. Although many studies included an overlay of lesion maps to characterise their patients, most did not differentiate lesion statistics between insula subunits and/or applied voxel-based associations between lesion location and impairment. This is probably due to small group sizes, which limit statistical comparisons. We conclude that multicentre analyses of lesion studies with comparable patients and performance tests are needed to definitively test the specific function of parts of the insula in emotional processing and social interaction.


Subject(s)
Facial Recognition , Stroke , Humans , Magnetic Resonance Imaging/methods , Emotions/physiology , Stroke/complications , Stroke/diagnostic imaging , Empathy , Brain Mapping/methods
8.
Addict Behav ; 153: 108006, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38457987

ABSTRACT

Previous research has found that individuals with Internet gaming disorder (IGD) show different patterns of social function impairments in game-related and real-life social contexts. Impaired social reward processing may be the underlying mechanism according to the Social Motivation Theory. Thus, in this study, event-related potentials were recorded from 24 individuals with IGD and 24 healthy gamers during a social judgement task. We focused on reward positivity (RewP) elicited by game-related and real-life social rewards, and N170 elicited by game avatar faces and real faces. These indicators were used to explore the neurocognitive mechanism of impaired social reward processing in individuals with IGD and its relationship with early face perception. Results showed that (1) the RewP elicited by real-life social reward was considerably reduced in individuals with IGD relative to healthy gamers. (2) The N170 elicited by game avatar faces in individuals with IGD was larger than that elicited by real faces. However, the N170 was not associated with RewP in either group. (3) The score for IGD severity was correlated with the RewP elicited by real-life social reward and the N170 elicited by game avatar face. In conclusion, the present study suggests that the impaired social reward processing in individuals with IGD is mainly manifested in a decreased neural sensitivity to real-life social reward. Meanwhile, the reduced RewP elicited by real-life social reward and the enhanced N170 elicited by game avatar face might serve as potential biomarkers for IGD.


Subject(s)
Behavior, Addictive , Facial Recognition , Video Games , Humans , Brain , Brain Mapping , Internet Addiction Disorder , Behavior, Addictive/psychology , Magnetic Resonance Imaging/methods , Reward , Internet , Video Games/psychology
9.
IEEE Trans Image Process ; 33: 2293-2304, 2024.
Article in English | MEDLINE | ID: mdl-38470591

ABSTRACT

Human emotions contain both basic and compound facial expressions. In many practical scenarios, it is difficult to access all the compound expression categories at one time. In this paper, we investigate comprehensive facial expression recognition (FER) in the class-incremental learning paradigm, where we define well-studied and easily-accessible basic expressions as initial classes and learn new compound expressions incrementally. To alleviate the stability-plasticity dilemma in our incremental task, we propose a novel Relationship-Guided Knowledge Transfer (RGKT) method for class-incremental FER. Specifically, we develop a multi-region feature learning (MFL) module to extract fine-grained features for capturing subtle differences in expressions. Based on the MFL module, we further design a basic expression-oriented knowledge transfer (BET) module and a compound expression-oriented knowledge transfer (CET) module, by effectively exploiting the relationship across expressions. The BET module initializes the new compound expression classifiers based on expression relevance between basic and compound expressions, improving the plasticity of our model to learn new classes. The CET module transfers expression-generic knowledge learned from new compound expressions to enrich the feature set of old expressions, facilitating the stability of our model against forgetting old classes. Extensive experiments on three facial expression databases show that our method achieves superior performance in comparison with several state-of-the-art methods.
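The BET module's idea — initializing a new compound-expression classifier from the basic-expression classifiers it relates to — can be sketched as a relevance-weighted combination of weight vectors. The class names, relevance weights, and feature dimensions below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Classifier weight vectors for two already-learned basic expressions
# (3-dimensional features purely for illustration).
basic_classifiers = {
    "happy":     np.array([1.0, 0.0, 0.2]),
    "surprised": np.array([0.0, 1.0, -0.1]),
}

# Assumed relevance of each basic expression to the new compound class
# "happily surprised"; in the paper this comes from expression relationships.
relevance = {"happy": 0.5, "surprised": 0.5}

# Initialize the new compound classifier as the relevance-weighted mix,
# giving it a sensible starting point instead of random weights.
new_classifier = sum(relevance[k] * basic_classifiers[k] for k in relevance)
```

Starting the compound classifier near its related basic classifiers means early gradients refine an already-plausible decision boundary, which is one way to improve plasticity when new classes arrive.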


Subject(s)
Facial Recognition , Humans , Emotions , Learning , Facial Expression , Databases, Factual
10.
Sci Rep ; 14(1): 6626, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38503841

ABSTRACT

Developmental prosopagnosia (DP) is characterised by deficits in face identification. However, there is debate about whether these deficits are primarily perceptual, and whether they extend to other face processing tasks (e.g., identifying emotion, age, and gender; detecting faces in scenes). In this study, 30 participants with DP and 75 controls completed a battery of eight tasks assessing four domains of face perception (identity; emotion; age and gender; face detection). The DP group performed worse than the control group on both identity perception tasks, and one task from each other domain. Both identity perception tests uniquely predicted DP/control group membership, and performance on two measures of face memory. These findings suggest that deficits in DP may arise from issues with face perception. Some non-identity tasks also predicted DP/control group membership and face memory, even when face identity perception was accounted for. Gender perception and speed of face detection consistently predicted unique variance in group membership and face memory; several other tasks were only associated with some measures of face recognition ability. These findings indicate that face perception deficits in DP may extend beyond identity perception. However, the associations between tasks may also reflect subtle aspects of task demands or stimuli.


Subject(s)
Facial Recognition , Prosopagnosia , Humans , Prosopagnosia/diagnosis , Emotions , Pattern Recognition, Visual
11.
Sci Rep ; 14(1): 6687, 2024 03 20.
Article in English | MEDLINE | ID: mdl-38509151

ABSTRACT

Congenital Prosopagnosia (CP) is an innate impairment in face perception with heterogeneous characteristics. It is still unclear if and to what degree holistic processing of faces is disrupted in CP. Such disruption would be expected to lead to a focus on local features of the face. In this study, we used binocular rivalry (BR) to implicitly measure face perception in conditions that favour holistic or local processing. The underlying assumption is that if stimulus saliency affects the perceptual dominance of a given stimulus in BR, one can deduce how salient a stimulus is for a given group (here: participants with and without CP) based on the measured perceptual dominance. A further open question is whether the deficit in face processing in CP extends to the processing of the facial display of emotions. In experiment 1, we compared predominance of upright and inverted faces displaying different emotions (fearful, happy, neutral) vs. houses between participants with CP (N = 21) and with normal face perception (N = 21). The results suggest that CP observers process emotions in faces automatically but rely more on local features than controls. The inversion of faces, which is supposed to disturb holistic processing, affected controls in a more pronounced way than participants with CP. In experiment 2, we introduced the Thatcher effect in BR by inverting the eye and mouth regions of the presented faces in the hope of further increasing the effect of face inversion. However, our expectations were not borne out by the results. Critically, both experiments showed that inversion effects were more pronounced in controls than in CP, suggesting that holistic face processing is less relevant in CP. We find BR to be a useful implicit test for assessing visual processing specificities in neurological participants.


Subject(s)
Facial Recognition , Prosopagnosia , Prosopagnosia/congenital , Humans , Prosopagnosia/psychology , Pattern Recognition, Visual , Visual Perception , Photic Stimulation
12.
Cereb Cortex ; 34(3), 2024 03 01.
Article in English | MEDLINE | ID: mdl-38451300

ABSTRACT

Although previous studies have reported sex differences in behavior/cognition and the brain, the sex difference in the relationship between memory abilities and the underlying neural basis during aging remains unclear. In this study, we used a machine learning model to estimate the association between cortical thickness and verbal/visuospatial memory in females and males, and then explored the sex difference of these associations in a community elderly cohort (n = 1153, age ranged from 50.42 to 86.67 years). We validated that females outperformed males in verbal memory, while males outperformed females in visuospatial memory. The key regions related to verbal memory in females include the medial temporal cortex, orbitofrontal cortex, and some regions around the insula. Those regions are located mainly in the limbic, dorsal attention, and default-mode networks, and are associated with face recognition and perception. The key regions related to visuospatial memory include the lateral prefrontal cortex, anterior cingulate gyrus, and some occipital regions. They overlapped more with the dorsal attention, frontoparietal, and visual networks, and were associated with object recognition. These findings imply that the memory performance advantages of females and males might be related to different memory processing tendencies and their associated networks.


Subject(s)
Facial Recognition , Sex Characteristics , Aged , Humans , Female , Male , Middle Aged , Aged, 80 and over , Brain , Cognition , Cytoplasm
13.
PLoS One ; 19(3): e0300973, 2024.
Article in English | MEDLINE | ID: mdl-38512901

ABSTRACT

OBJECTIVE: Most previous studies have examined emotion recognition in autism spectrum condition (ASC) without intellectual disability (ID). However, ASC and ID co-occur to a high degree. The main aims of the study were to examine emotion recognition in individuals with ASC and co-occurring intellectual disability (ASC-ID) as compared to individuals with ID alone, and to investigate the relationship between emotion recognition and social functioning. METHODS: The sample consisted of 30 adult participants with ASC-ID and a comparison group of 29 participants with ID. Emotion recognition was assessed by the facial emotions test, while social functioning was assessed by the social responsiveness scale-second edition (SRS-2). RESULTS: The accuracy of emotion recognition was significantly lower in individuals with ASC-ID compared to the control group with ID, especially for angry and fearful emotions. Participants with ASC-ID exhibited more pronounced difficulties in social functioning compared to those with ID, and there was a significant negative correlation between emotion recognition and social functioning. However, emotion recognition accounted for only 8% of the variability observed in social functioning. CONCLUSION: Our data indicate severe difficulties in the social-perceptual domain and in everyday social functioning in individuals with ASC-ID.
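The reported figure — emotion recognition explaining only 8% of the variance in social functioning — follows directly from the squared correlation coefficient. A quick numeric check; the helper function and toy data are illustrative, not the study's data:

```python
import numpy as np

def variance_explained(x, y):
    """Share of variance in y linearly accounted for by x (r squared)."""
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r * r

# A negative correlation that explains 8% of the variance corresponds to
# r = -sqrt(0.08), roughly -0.28: a small-to-medium effect size.
r_squared = 0.08
r = -np.sqrt(r_squared)
```

This is why a statistically significant correlation can still leave most of the variability in social functioning unaccounted for.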


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Facial Recognition , Intellectual Disability , Adult , Humans , Autistic Disorder/psychology , Social Interaction , Intellectual Disability/psychology , Emotions , Autism Spectrum Disorder/psychology , Facial Expression
14.
Neuroimage Clin ; 41: 103586, 2024.
Article in English | MEDLINE | ID: mdl-38428325

ABSTRACT

BACKGROUND: Emotion processing deficits are known to accompany depressive symptoms and are often seen in stroke patients. Little is known about the influence of post-stroke depressive (PSD) symptoms and specific brain lesions on altered emotion processing abilities and how these phenomena develop over time. This potential relationship may impact post-stroke rehabilitation of neurological and psychosocial function. To address this scientific gap, we investigated the relationship between PSD symptoms and emotion processing abilities in a longitudinal study design from the first days post-stroke into the early chronic phase. METHODS: Twenty-six ischemic stroke patients performed an emotion processing task on videos with emotional faces ('happy,' 'sad,' 'anger,' 'fear,' and 'neutral') at different intensity levels (20%, 40%, 60%, 80%, 100%). Recognition accuracies and response times were measured, as well as scores of depressive symptoms (Montgomery-Åsberg Depression Rating Scale). Twenty-eight healthy participants matched in age and sex were included as a control group. Whole-brain support-vector regression lesion-symptom mapping (SVR-LSM) analyses were performed to investigate whether specific lesion locations were associated with the recognition accuracy of specific emotion categories. RESULTS: Stroke patients performed worse in overall recognition accuracy compared to controls, specifically in the recognition of happy, sad, and fearful faces. Notably, more depressed stroke patients showed increased processing of specific negative emotions: they responded significantly faster to angry faces and recognized low-intensity sad faces significantly more accurately. These effects, observed in the first days after stroke, partly persisted to the follow-up assessment several months later. SVR-LSM analyses revealed that inferior and middle frontal regions (IFG/MFG) as well as the insula and putamen were associated with emotion-recognition deficits after stroke.
Specifically, recognizing happy facial expressions was influenced by lesions affecting the anterior insula, putamen, IFG, MFG, orbitofrontal cortex, and rolandic operculum. Lesions in the posterior insula, rolandic operculum, and MFG were also related to reduced recognition accuracy of fearful facial expressions, whereas recognition deficits of sad faces were associated with frontal pole, IFG, and MFG damage. CONCLUSION: PSD symptoms facilitate processing negative emotional stimuli, specifically angry and sad facial expressions. The recognition accuracy of different emotional categories was linked to brain lesions in emotion-related processing circuits, including insula, basal ganglia, IFG, and MFG. In summary, our study provides support for psychosocial and neural factors underlying emotional processing after stroke, contributing to the pathophysiology of PSD.


Subject(s)
Depression , Facial Recognition , Humans , Longitudinal Studies , Emotions/physiology , Anger , Brain/diagnostic imaging , Facial Expression , Facial Recognition/physiology
15.
Dev Psychol ; 60(4): 649-664, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38483484

ABSTRACT

Adolescence is a critical developmental period that is marked by drastic changes in face recognition, which are reflected in patterns of bias (i.e., superior recognition for some individuals compared to others). Here, we evaluate how race is perceived during face recognition and whether adolescents exhibit an own-race bias (ORB). We conducted a Bayesian meta-analysis to estimate the summary effect size of the ORB across 16 unique studies (38 effect sizes) with 1,321 adolescent participants between ∼10 and 22 years of age. This meta-analytic approach allowed us to inform the analysis with prior findings from the adult literature and evaluate how well they fit the adolescent literature. We report a positive, small ORB (Hedges's g = 0.24) that was evident under increasing levels of uncertainty in the analysis. The magnitude of the ORB was not systematically impacted by participant age or race, which is inconsistent with predictions from perceptual expertise and social cognitive theories. Critically, our findings are limited in generalizability by the study samples, which largely include White adolescents in White-dominant countries. Future longitudinal studies that include racially diverse samples and measure social context, perceiver motivation, peer reorientation, social network composition, and ethnic-racial identity development are critical for understanding the presence, magnitude, and relative flexibility of the ORB in adolescence. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
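The pooling step of such a meta-analysis can be illustrated with classical fixed-effect inverse-variance weighting — a simplified frequentist stand-in for the Bayesian model used in the paper. The per-study values below are invented for illustration:

```python
import numpy as np

def inverse_variance_summary(g, v):
    """Fixed-effect inverse-variance pooling: each study's effect size is
    weighted by 1/variance, and the pooled standard error shrinks accordingly."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v
    pooled = float(np.sum(w * g) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return pooled, se

# Hypothetical per-study Hedges's g values and sampling variances.
g = [0.10, 0.30, 0.25, 0.35]
v = [0.02, 0.04, 0.03, 0.05]
pooled, se = inverse_variance_summary(g, v)
```

The pooled estimate always lies within the range of the individual effects and carries a smaller standard error than any single study, which is the basic payoff of meta-analytic pooling; a Bayesian random-effects model additionally propagates between-study heterogeneity and prior information.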


Subject(s)
Facial Recognition , Racial Groups , Adolescent , Child , Humans , Young Adult , Bayes Theorem , Peer Group , Recognition, Psychology
16.
BMC Psychiatry ; 24(1): 226, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532335

ABSTRACT

BACKGROUND: Patients with schizophrenia (SCZ) exhibit deficits in recognizing facial expressions with unambiguous valence. However, only a limited number of studies have examined how these patients fare in interpreting facial expressions with ambiguous valence (for example, surprise). Thus, we aimed to explore the influence of emotional background information on the recognition of ambiguous facial expressions in SCZ. METHODS: A 3 (emotion: negative, neutral, and positive) × 2 (group: healthy controls and SCZ) experimental design was adopted in the present study. The experimental materials consisted of 36 images of negative emotions, 36 images of neutral emotions, 36 images of positive emotions, and 36 images of surprised facial expressions. In each trial, a briefly presented surprised face was preceded by an affective image. Participants (36 SCZ and 36 healthy controls (HC)) were required to rate the emotional experience induced by the surprised facial expressions on a 9-point rating scale. The data were analyzed using analyses of variance (ANOVAs) and correlation analysis. RESULTS: First, the SCZ group reported a more positive emotional experience under the positive cued condition compared to the negative cued condition. Meanwhile, the HC group reported the strongest positive emotional experience in the positive cued condition, a moderate experience in the neutral cued condition, and the weakest in the negative cued condition. Second, the SCZ (vs. HC) group showed longer reaction times (RTs) for recognizing surprised facial expressions. The severity of schizophrenia symptoms in the SCZ group was negatively correlated with rating scores for emotional experience under the neutral and positive cued conditions. CONCLUSIONS: Recognition of surprised facial expressions was influenced by background information in both SCZ and HC, and by the negative symptoms in SCZ.
The present study indicates that the role of background information should be fully considered when examining the ability of SCZ to recognize ambiguous facial expressions.
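The negative correlation reported between symptom severity and emotional-experience ratings is a standard Pearson product-moment correlation. As a minimal pure-Python sketch (the numbers below are illustrative, not data from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Numerator: sum of cross-products of deviations from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Denominator: product of the two root sums of squared deviations
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: higher symptom severity paired with lower ratings,
# matching the direction (negative r) reported in the abstract
severity = [10, 14, 18, 22, 30]
ratings = [6.5, 6.0, 5.2, 4.8, 4.0]
r = pearson_r(severity, ratings)  # r < 0 for this illustrative pairing
```

In practice one would also report the significance of r against df = n − 2, as the abstract's correlation analysis implies.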


Subject(s)
Facial Recognition, Schizophrenia, Humans, Emotions, Recognition, Psychology, Facial Expression, China
17.
J Integr Neurosci ; 23(3): 48, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38538212

ABSTRACT

In the context of perceiving individuals within and outside of social groups, there are distinct cognitive processes and mechanisms in the brain. Extensive research in recent years has delved into the neural mechanisms that underlie differences in how we perceive individuals from different social groups. To gain a deeper understanding of these neural mechanisms, we present a comprehensive review from the perspectives of facial recognition and memory, intergroup identification, empathy, and pro-social behavior. Specifically, we focus on studies that utilize functional magnetic resonance imaging (fMRI) and event-related potential (ERP) techniques to explore the relationship between brain regions and behavior. Findings from fMRI studies reveal that the brain regions associated with intergroup differentiation in perception and behavior do not operate independently but instead exhibit dynamic interactions. Similarly, ERP studies indicate that the amplitude of neural responses shows various combinations in relation to perception and behavior.


Subject(s)
Empathy, Facial Recognition, Humans, Magnetic Resonance Imaging, Brain/physiology, Evoked Potentials/physiology, Brain Mapping, Social Behavior
18.
PLoS One ; 19(3): e0297050, 2024.
Article in English | MEDLINE | ID: mdl-38517878

ABSTRACT

Research on the factors that shape consumers' purchase intention toward luxury goods has received widespread attention from the academic community. This study collected data in Guilin, China, through a questionnaire survey and conducted an empirical study of the factors influencing luxury consumers' purchase intention. The results show that the price level of luxury goods has a positive effect on consumers' face perception, while a positive effect of price level on expected regret was not verified; consumers' face perception has a positive effect on expected regret and a negative effect on purchase intention; downward and upward expected regret affect purchase intention differently; and face perception and expected regret mediate these influence relationships. The study supports a better analysis of the psychology and behavior of Chinese luxury consumers, enriches the theory of consumer psychology, and promotes the healthy development of the luxury goods industry.


Subject(s)
Facial Recognition, Intention, Emotions, Consumer Behavior, Mental Processes
19.
Cortex ; 173: 283-295, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38442567

ABSTRACT

Evidence suggests that some patients with isolated hippocampal damage present with selective preservation of unfamiliar face recognition relative to other kinds of visual test stimuli (e.g., words). Bird and Burgess (2008) conducted a review and secondary analysis of a group of 10 such cases, all tested on a clinical assessment of word and face recognition memory (RMT; Warrington, 1984), which confirmed the key memory dissociation at the group level. The current work provides an updated secondary analysis of such cases with a larger published sample (N = 52). In addition to group-level analyses, we re-evaluate the evidence using a single-case statistical approach (Crawford & Garthwaite, 2005), enabling us to determine how many cases meet criteria for a 'classical dissociation' (Crawford, Garthwaite, & Gray, 2003). Overall, group-level analyses indicated that the key pattern, significant differences confined to words, was limited to comparisons against small control samples. When the large control sample provided by Bird and Burgess (2008) was used, hippocampal cases as a group were significantly poorer on both classes of items. Furthermore, the single-case approach indicated that few cases showed a face > word performance difference large enough to reach statistical significance, that is, a within-individual difference across categories that would warrant a 'classical dissociation'. Moreover, these analyses found several cases with a 'classical dissociation' in the reverse direction, namely preserved recognition of words. Such analyses demonstrate the need for a more conservative statistical approach when reporting selective 'preservation' of a category in recognition memory. Whilst material specificity has important implications for understanding the role of the hippocampus in memory, our results highlight the need for rigorous statistical methods before any claims are made.
Lastly, we highlight other methodological issues critical to group analyses and make suggestions for future work.
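The single-case statistical approach cited above compares an individual patient's score against a small control sample. One core ingredient, the Crawford-Howell modified t-test, can be sketched in a few lines of Python (the scores below are illustrative, not data from the study):

```python
import math

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) modified t-test for a single case.

    Treats the controls as a sample rather than a population, so it
    remains valid for small control groups. Returns (t, degrees of
    freedom); t is evaluated against a t-distribution with n - 1 df.
    """
    n = len(control_scores)
    mean_c = sum(control_scores) / n
    # Unbiased (n - 1 denominator) variance of the control sample
    var_c = sum((x - mean_c) ** 2 for x in control_scores) / (n - 1)
    sd_c = math.sqrt(var_c)
    # The sqrt((n + 1) / n) term inflates the SE for the extra
    # uncertainty of comparing one new observation to a small sample
    t = (case_score - mean_c) / (sd_c * math.sqrt((n + 1) / n))
    return t, n - 1

# Illustrative example: a hypothetical case scoring well below controls
controls = [10, 11, 9, 10, 12, 8, 10, 11, 9, 10]
t, df = crawford_howell_t(5.0, controls)  # large negative t, df = 9
```

Note that a 'classical dissociation' requires more than a deficit on one task: the case must also differ significantly *between* the two tasks (e.g., via Crawford and Garthwaite's Revised Standardized Difference Test), which is the stricter criterion the secondary analysis applies.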


Subject(s)
Facial Recognition, Humans, Recognition, Psychology, Amnesia, Hippocampus, Individuality, Pattern Recognition, Visual
20.
Cortex ; 173: 333-338, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460488

ABSTRACT

Developmental prosopagnosia (DP) is characterised by difficulties recognising face identities and is associated with diverse co-occurring object recognition difficulties. The high co-occurrence rate and heterogeneity of associated difficulties in DP is an intrinsic feature of developmental conditions, where co-occurrence of difficulties is the rule, rather than the exception. However, despite its name, cognitive and neural theories of DP rarely consider the developmental context in which these difficulties occur. This leaves a large gap in our understanding of how DP emerges in light of the developmental trajectory of face recognition. Here, we argue that progress in the field requires re-considering the developmental origins of differences in face recognition abilities, rather than studying the end-state alone. In practice, considering development in DP necessitates a re-evaluation of current approaches in recruitment, design, and analyses.


Subject(s)
Facial Recognition, Prosopagnosia, Humans, Visual Perception, Pattern Recognition, Visual